Although large language models (LLMs) often produce impressive outputs, they can also fail to reason and to be factual. We set out to investigate how these limitations affect the ability of LLMs to answer and reason about difficult questions. We applied the human-aligned GPT-3 (InstructGPT) to answer multiple-choice medical exam questions (USMLE and MedMCQA) and medical research questions (PubMedQA). We investigated chain-of-thought (think step by step) prompting, grounding (augmenting the prompt with search results), and few-shot prompting (prepending the question with question-answer exemplars). For a subset of the USMLE questions, a medical domain expert reviewed and annotated the model's reasoning. Overall, GPT-3 achieved a substantial improvement over the state-of-the-art machine learning performance. We observed that GPT-3 is often knowledgeable and can understand medical questions. When confronted with a question it cannot answer, GPT-3 will still attempt an answer, often resulting in a biased predictive distribution. LLMs are not on par with human performance, but our results suggest the emergence of reasoning patterns compatible with medical problem-solving. We speculate that scaling the model and the data, enhancing prompt alignment, and allowing for better contextualization of the completions will be sufficient for LLMs to reach human-level performance on this type of task.
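The prompting strategies named above (few-shot priming and the chain-of-thought cue) can be sketched as plain string assembly; the exemplar question and options below are illustrative placeholders, not items from USMLE, MedMCQA, or PubMedQA.

```python
# Sketch of few-shot + chain-of-thought prompt assembly for a
# multiple-choice question. Exemplar content is hypothetical.
COT_CUE = "Let's think step by step."

def build_prompt(question, options, exemplars=(), chain_of_thought=False):
    """Assemble a multiple-choice prompt for an instruction-tuned LLM."""
    parts = []
    for ex_q, ex_a in exemplars:                 # few-shot priming
        parts.append(f"Question: {ex_q}\nAnswer: {ex_a}\n")
    letters = "ABCDE"
    opts = "\n".join(f"{letters[i]}) {o}" for i, o in enumerate(options))
    parts.append(f"Question: {question}\n{opts}\nAnswer:")
    if chain_of_thought:
        parts.append(f" {COT_CUE}")              # elicit step-by-step reasoning
    return "".join(parts)

prompt = build_prompt(
    "Which vitamin deficiency causes scurvy?",
    ["Vitamin A", "Vitamin B12", "Vitamin C", "Vitamin D"],
    exemplars=[("Which organ produces insulin?", "The pancreas.")],
    chain_of_thought=True,
)
print(prompt)
```

Grounding would extend the same assembly by prepending retrieved search snippets before the question.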
While the capabilities of autonomous systems have been steadily improving in recent years, these systems still struggle to rapidly explore previously unknown environments without the aid of GPS-assisted navigation. The DARPA Subterranean (SubT) Challenge aimed to fast-track the development of autonomous exploration systems by evaluating their performance in real-world underground search-and-rescue scenarios. Subterranean environments present a plethora of challenges for robotic systems, such as limited communications, complex topology, visually-degraded sensing, and harsh terrain. The presented solution enables long-term autonomy with minimal human supervision by combining a powerful and independent single-agent autonomy stack with higher-level mission management operating over a flexible mesh network. The autonomy suite deployed on quadruped and wheeled robots was fully independent, freeing the human supervisor to loosely supervise the mission and make high-impact strategic decisions. We also discuss lessons learned from fielding our system at the SubT Final Event, relating to vehicle versatility, system adaptability, and re-configurable communications.
Masked Image Modelling (MIM) has been shown to be an efficient self-supervised learning (SSL) pre-training paradigm when paired with transformer architectures and in the presence of a large amount of unlabelled natural images. The combination of the difficulties in accessing and obtaining large amounts of labelled data and the availability of unlabelled data in the medical imaging domain makes MIM an interesting approach to advance deep learning (DL) applications based on 3D medical imaging data. Nevertheless, SSL and, in particular, MIM applications with medical imaging data are rather scarce, and there is still uncertainty around the potential of such a learning paradigm in the medical domain. We study MIM in the context of Prostate Cancer (PCa) lesion classification with T2-weighted (T2w) axial magnetic resonance imaging (MRI) data. In particular, we explore the effect of using MIM when coupled with convolutional neural networks (CNNs) under different conditions, such as different masking strategies, obtaining better results in terms of AUC than other pre-training strategies like ImageNet weight initialization.
Research connecting text and images has recently seen several breakthroughs, with models like CLIP, DALL-E 2, and Stable Diffusion. However, the connection between text and other visual modalities, such as lidar data, has received less attention, prohibited by the lack of text-lidar datasets. In this work, we propose LidarCLIP, a mapping from automotive point clouds to a pre-existing CLIP embedding space. Using image-lidar pairs, we supervise a point cloud encoder with the image CLIP embeddings, effectively relating text and lidar data with the image domain as an intermediary. We show the effectiveness of LidarCLIP by demonstrating that lidar-based retrieval is generally on par with image-based retrieval, but with complementary strengths and weaknesses. By combining image and lidar features, we improve upon both single-modality methods and enable a targeted search for challenging detection scenarios under adverse sensor conditions. We also use LidarCLIP as a tool to investigate fundamental lidar capabilities through natural language. Finally, we leverage our compatibility with CLIP to explore a range of applications, such as point cloud captioning and lidar-to-image generation, without any additional training. We hope LidarCLIP can inspire future work to dive deeper into connections between text and point cloud understanding. Code and trained models available at https://github.com/atonderski/lidarclip.
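The shared-embedding retrieval idea above can be sketched in a few lines: once a point cloud encoder has been distilled to produce CLIP-like vectors, a text query retrieves lidar scans by cosine similarity in the joint space. The toy embeddings below are hypothetical stand-ins for real CLIP outputs.

```python
# Sketch of text-to-lidar retrieval in a shared embedding space.
# All embedding values are made up for illustration.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(text_embedding, lidar_embeddings):
    """Rank lidar scans by similarity to a text query in the joint space."""
    scored = [(cosine(text_embedding, e), name)
              for name, e in lidar_embeddings.items()]
    return [name for _, name in sorted(scored, reverse=True)]

lidar_db = {                       # hypothetical distilled lidar embeddings
    "scan_pedestrian": [0.9, 0.1, 0.0],
    "scan_truck":      [0.1, 0.9, 0.1],
}
query = [0.8, 0.2, 0.1]            # hypothetical CLIP text embedding
ranking = retrieve(query, lidar_db)
print(ranking[0])   # the scan whose embedding best matches the query
```

Combining modalities, as the abstract describes, would amount to fusing image and lidar vectors (e.g., averaging) before the same similarity ranking.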
Motion prediction systems aim to capture the future behavior of traffic scenarios, enabling autonomous vehicles to perform safe and efficient planning. The evolution of these scenarios is highly uncertain and depends on the interactions of agents with static and dynamic objects in the scene. GNN-based approaches have recently gained attention as they are well suited to naturally model these interactions. However, one of the main challenges that remains unexplored is how to address the complexity and opacity of these models in order to deal with the transparency requirements for autonomous driving systems, which include aspects such as interpretability and explainability. In this work, we aim to improve the explainability of motion prediction systems by using different approaches. First, we propose a new Explainable Heterogeneous Graph-based Policy (XHGP) model based on a heterograph representation of the traffic scene and lane-graph traversals, which learns interaction behaviors using object-level and type-level attention. This learned attention provides information about the most important agents and interactions in the scene. Second, we explore this same idea with the explanations provided by GNNExplainer. Third, we apply counterfactual reasoning to provide explanations of selected individual scenarios by exploring the sensitivity of the trained model to changes made to the input data, i.e., masking some elements of the scene, modifying trajectories, and adding or removing dynamic agents. The explainability analysis provided in this paper is a first step towards more transparent and reliable motion prediction systems, important from the perspective of users, developers, and regulatory agencies. The code to reproduce this work is publicly available at https://github.com/sancarlim/Explainable-MP/tree/v1.1.
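The counterfactual probing described above reduces to a simple loop: perturb the scene (here, remove one agent), rerun the model, and measure how much the output shifts. The `toy_predictor` below is a made-up stand-in for a trained motion prediction model, not the XHGP architecture itself.

```python
# Sketch of counterfactual explanation by agent removal.
# WEIGHTS and the predictor are hypothetical, for illustration only.
WEIGHTS = {"car_1": 0.5, "cyclist_2": 0.3, "pedestrian_3": 1.2}

def toy_predictor(scene):
    """Stand-in model: output depends additively on which agents are present."""
    return sum(WEIGHTS[a] for a in scene["agents"])

def counterfactual_effect(scene, agent_id, predictor):
    """How much does the prediction change when agent_id is masked out?"""
    baseline = predictor(scene)
    perturbed = {"agents": [a for a in scene["agents"] if a != agent_id]}
    return abs(predictor(perturbed) - baseline)

scene = {"agents": ["car_1", "cyclist_2", "pedestrian_3"]}
# Rank agents by how strongly their removal changes the output.
effects = {a: counterfactual_effect(scene, a, toy_predictor)
           for a in scene["agents"]}
print(effects)
```

The same loop extends to the other perturbations the abstract mentions (modifying trajectories, adding agents) by changing how `perturbed` is constructed.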
We introduce a new benchmark dataset, Placenta, for node classification in an underexplored domain: predicting microanatomical tissue structures from cell graphs in placenta histology whole slide images. This problem is uniquely challenging for graph learning for a few reasons. Cell graphs are large (>1 million nodes per image), node features are varied (64-dimensions of 11 types of cells), class labels are imbalanced (9 classes ranging from 0.21% of the data to 40.0%), and cellular communities cluster into heterogeneously distributed tissues of widely varying sizes (from 11 nodes to 44,671 nodes for a single structure). Here, we release a dataset consisting of two cell graphs from two placenta histology images totalling 2,395,747 nodes, 799,745 of which have ground truth labels. We present inductive benchmark results for 7 scalable models and show how the unique qualities of cell graphs can help drive the development of novel graph neural network architectures.
This paper reports on the state of the art in underground SLAM by discussing the different SLAM strategies and results of six teams that participated in the three-year-long SubT competition. In particular, the paper has four main goals. First, we review the algorithms, architectures, and systems adopted by the teams; particular emphasis is placed on lidar-centric SLAM solutions (the preferred approach for virtually all teams in the competition), heterogeneous multi-robot operation (including aerial and ground robots), and real-world underground operation (from the presence of obscurants to the need to handle tight computational constraints). We do not shy away from discussing the dirty details behind the different SubT SLAM systems, which are often omitted from technical papers. Second, we discuss the maturity of the field by highlighting what is possible with current SLAM systems and what we believe is within reach with some good systems engineering. Third, we outline what we believe are fundamental open problems, which are likely to require further research to break through. Finally, we provide a list of open-source SLAM implementations and datasets produced during the SubT Challenge and related efforts, constituting a useful resource for researchers and practitioners.
Humans innately measure the distance between instances in an unlabeled dataset using an unknown similarity function. Distance metrics can only serve as a proxy for similarity when retrieving similar instances. Learning a good similarity function from human annotations can improve the quality of retrieval. This work uses deep metric learning to learn such user-defined similarity functions from few annotations on a large dataset of soccer trajectories. We adapt recent work on entropy-based active learning with triplet mining to collect annotations that are easy to elicit from human participants yet still informative, and use them to train a deep convolutional network that generalizes to unseen samples. Our user study shows that our approach improves the quality of information retrieval compared to previous deep metric learning approaches that rely on Siamese networks. Specifically, we shed light on the strengths and weaknesses of passive sampling heuristics and active learners by analyzing the participants' response efficiency. To this end, we collect accuracy, algorithmic time complexity, participants' fatigue and response times, qualitative self-assessments and statements, as well as the influence of mixed-expertise annotators and their consistency on model performance and transfer learning.
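A minimal pure-Python sketch of the triplet objective that underlies metric learning from human-annotated triplets: pull an anchor trajectory embedding toward an example judged "similar" and push it away from a "dissimilar" one by at least a margin. The embeddings here are toy values, not real trajectory features.

```python
# Standard triplet margin loss, the building block behind triplet mining.
import math

def dist(a, b):
    """Euclidean distance between two embedding vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """max(0, d(a,p) - d(a,n) + margin): zero once the triplet is separated."""
    return max(0.0, dist(anchor, positive) - dist(anchor, negative) + margin)

anchor   = [0.0, 0.0]
positive = [0.1, 0.0]   # judged similar by an annotator
negative = [2.0, 0.0]   # judged dissimilar
print(triplet_loss(anchor, positive, negative))  # 0.0: already well separated
```

Entropy-based active learning, as used in the paper, would then prioritize showing annotators the triplets about which the current model is least certain.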
Masked autoencoding has become a successful pre-training paradigm for Transformer models on text, images, and, recently, point clouds. Raw automotive datasets are suitable candidates for self-supervised pre-training, as they are generally cheap to collect compared to annotations for tasks such as 3D object detection (OD). However, the development of masked autoencoders for point clouds has focused solely on synthetic and indoor data. Consequently, existing methods have tailored their representations and models to point clouds that are small, dense, and have homogeneous point density. In this work, we study masked autoencoding in an automotive setting, where point clouds are sparse and point density can vary drastically between objects within the same scene. To this end, we propose Voxel-MAE, a simple masked autoencoding pre-training scheme designed for voxel representations. We pre-train the backbone of a Transformer-based 3D object detector to reconstruct masked voxels and to distinguish between empty and non-empty voxels. Our method improves 3D OD performance by 1.75 mAP points and 1.05 NDS on the nuScenes dataset. Compared to existing self-supervised methods for automotive data, Voxel-MAE displays up to a 2x performance increase. Further, we show that by pre-training with Voxel-MAE, we require only 40% of the annotated data to outperform a randomly initialized equivalent. Code will be released.
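The masking step of a Voxel-MAE-style scheme can be sketched as follows: randomly hide a fraction of the non-empty voxels so that the model must reconstruct them (and learn to tell empty from non-empty). The voxel contents below are toy point counts, not a real nuScenes grid, and the 70% ratio is illustrative.

```python
# Simplified random voxel masking for a masked-autoencoding sketch.
import random

def mask_voxels(voxel_grid, mask_ratio=0.7, seed=0):
    """Return (visible, masked) index sets over the non-empty voxels."""
    non_empty = [i for i, pts in enumerate(voxel_grid) if pts > 0]
    rng = random.Random(seed)                    # seeded for reproducibility
    n_masked = int(len(non_empty) * mask_ratio)
    masked = set(rng.sample(non_empty, n_masked))
    visible = [i for i in non_empty if i not in masked]
    return visible, masked

grid = [0, 3, 0, 5, 2, 0, 1, 4, 0, 2]   # points per voxel (0 = empty)
visible, masked = mask_voxels(grid)
print(len(masked), "of", sum(1 for v in grid if v), "non-empty voxels masked")
```

In the actual method, the encoder sees only the visible voxels, and the decoder is trained both to reconstruct the masked ones and to classify voxels as empty or non-empty.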
Accurate uncertainty estimates are essential for deploying deep object detectors in safety-critical systems. The development and evaluation of probabilistic object detectors have been hindered by shortcomings in existing performance measures, which tend to involve arbitrary thresholds or limit the detector's choice of distributions. In this work, we propose to view object detection as a set prediction task, where detectors predict a distribution over the set of objects. Using the negative log-likelihood for random finite sets, we present a proper scoring rule for evaluating and training probabilistic object detectors. The proposed method can be applied to existing probabilistic detectors, is free of thresholds, and enables fair comparison between architectures. Three different types of detectors are evaluated on the COCO dataset. Our results indicate that the training of existing detectors is optimized toward non-probabilistic metrics. We hope to encourage the development of new object detectors that can accurately estimate their own uncertainty. Code is available at https://github.com/georghess/pmb-nll.
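A toy illustration of what scoring detections with a negative log-likelihood (rather than a thresholded metric) looks like: each detection carries an existence probability, and the score rewards calibrated confidence. This is a Bernoulli simplification for intuition only, not the paper's full random-finite-set NLL.

```python
# Bernoulli NLL over detections: lower is better, and calibrated
# confidences beat overconfident ones. A didactic simplification.
import math

def detection_nll(predictions, ground_truth):
    """Sum of -log p for real objects and -log(1-p) for false detections."""
    nll = 0.0
    for prob, is_real in zip(predictions, ground_truth):
        p = min(max(prob, 1e-12), 1 - 1e-12)   # clamp for numerical safety
        nll += -math.log(p) if is_real else -math.log(1 - p)
    return nll

confident_correct = detection_nll([0.99, 0.01], [True, False])
overconfident     = detection_nll([0.99, 0.99], [True, False])
print(confident_correct, "<", overconfident)   # calibration is rewarded
```

No threshold appears anywhere in this score, which is the property the abstract highlights: every detection's probability contributes directly.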